Learnability, representation, and language

Authors

  • Amy Perfors
  • Joshua Tenenbaum
  • Matthew Wilson
Abstract

Within the metaphor of the “mind as a computation device” that dominates cognitive science, understanding human cognition means understanding learnability – not only what (and how) the brain learns, but also what data is available to it from the world. Ideal learnability arguments seek to characterize what knowledge is in theory possible for an ideal reasoner to acquire, which illuminates the path towards understanding what human reasoners actually do acquire. The goal of this thesis is to exploit recent advances in machine learning to revisit three common learnability arguments in language acquisition. By formalizing them in Bayesian terms and evaluating them given realistic, real-world datasets, we achieve insight about what must be assumed about a child’s representational capacity, learning mechanism, and cognitive biases. Exploring learnability in the context of an ideal learner but realistic (rather than ideal) datasets enables us to investigate what could be learned in practice rather than noting what is impossible in theory. Understanding how higher-order inductive constraints can themselves be learned permits us to reconsider inferences about innate inductive constraints in a new light. And realizing how a learner who evaluates theories based on a simplicity/goodness-of-fit tradeoff can handle sparse evidence may lead to a new perspective on how humans reason based on the noisy and impoverished data in the world. The learnability arguments I consider all ultimately stem from the impoverishment of the input – either because it lacks negative evidence, it lacks a certain essential kind of positive evidence, or it lacks sufficient quantity of evidence necessary for choosing from an infinite set of possible generalizations. 
I focus on these learnability arguments in the context of three major topics in language acquisition: the acquisition of abstract linguistic knowledge about hierarchical phrase structure, the acquisition of verb argument structures, and the acquisition of word learning biases. Thesis Supervisor: Joshua Tenenbaum. Title: Associate Professor of Cognitive Science
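The abstract describes a learner that evaluates candidate grammars by trading off simplicity against goodness of fit. A minimal sketch of that idea, under the standard Bayesian reading (prior favors shorter descriptions, likelihood rewards fit to the data), might look like the following. The description lengths and per-observation log-likelihoods are illustrative placeholders, not values from the thesis:

```python
import math

def log_posterior(description_length_bits, log_likelihood):
    """Unnormalized log posterior: log P(h) + log P(data | h).

    Prior: P(h) proportional to 2^(-description_length), so simpler
    hypotheses (shorter descriptions) receive higher prior probability.
    """
    log_prior = -description_length_bits * math.log(2)
    return log_prior + log_likelihood

def preferred_grammar(n_observations):
    """Compare a simple grammar that fits loosely against a complex
    grammar that fits each observation better (numbers are illustrative)."""
    simple_lp = log_posterior(10, -5.0 * n_observations)
    complex_lp = log_posterior(40, -4.0 * n_observations)
    return "simple" if simple_lp > complex_lp else "complex"

# With sparse evidence the simplicity prior dominates and the simple
# grammar is preferred; as evidence accumulates, the likelihood terms
# grow with the data while the prior penalty stays fixed, so the
# better-fitting complex grammar eventually wins.
```

This captures why such a learner can generalize sensibly from impoverished data: with little evidence, the prior keeps it from overfitting, and the balance shifts automatically as more data arrives.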


Similar articles

Learnability of the Classic Knowledge Representation Language

Much of the work in inductive learning considers languages that represent hypotheses using some restricted form of propositional logic; an important research problem is extending propositional learning algorithms to use more expressive first-order representations. Further, it is desirable for these algorithms to have solid theoretical foundations, so that their behavior is reliable and predictab...


‘Ideal learning’ of natural language: Positive results about learning from positive evidence

Gold’s [1967. Language identification in the limit. Information and Control, 16, 447–474] celebrated work on learning in the limit has been taken, by many cognitive scientists, to have powerful negative implications for the learnability of language from positive data (i.e., from mere exposure to linguistic input). This provides one of several lines of argument that language acquisition must d...


A Learnable Constraint-based Grammar Formalism

Lexicalized Well-Founded Grammar (LWFG) is a recently developed syntactic-semantic grammar formalism for deep language understanding, which balances expressiveness with provable learnability results. The learnability result for LWFGs assumes that the semantic composition constraints are learnable. In this paper, we show what properties and principles the semantic representation and gramm...


A Learnability Model for Universal Representations

This paper defines a new computational model of inductive learning, called U-learnability (Universal Learnability), that is well-suited for rich representation languages, most notably for universal (Turing equivalent) representations. It is motivated by three observations. Firstly, existing computational models of inductive learning, the best known of which are identification in the limit and PAC-...


Case-Based Representation and Learning of Pattern Languages

Pattern languages seem to suit case-based reasoning particularly well. Therefore, the problem of inductively learning pattern languages is paraphrased in a case-based manner. A careful investigation requires a formal semantics for case bases together with similarity measures in terms of formal languages. Two basic semantics are introduced and investigated. It turns out that representability pr...




Publication date: 2008